The New Tech Risk Environment

Around the world, policymakers are coming to grips with the implications of artificial intelligence and its role in the broader digital and tech ecosystem. While different countries have different priorities and governance structures, all have reasons to worry about the technology's potential to cause harm.

Since serving for a decade as a member of the European Parliament from the Netherlands, Marietje Schaake has become a leading transatlantic voice in debates about technology and the policy responses to digital innovations. Now the international policy director at Stanford University’s Cyber Policy Center and international policy fellow at Stanford HAI (Human-Centered Artificial Intelligence), she weighs in regularly on the risks of privately governed AI, social media- and AI-augmented disinformation, and related topics, including as a member of the United Nations AI Advisory Body’s Executive Committee.

Project Syndicate: The UN’s AI Advisory Body recently released its interim report. What message have you been stressing in those meetings?

Marietje Schaake: The UN has a vital role to play in giving new meaning to the UN Charter – the basis of human rights, international law, and global legitimacy – in this new era of disruption. It is particularly important to have a truly global outlook and to consider the lived experiences and interests of people in the Global South. That is too often missing in tech-policy analysis and proposals, so I am excited about the difference the UN can make in AI governance.

PS: Have there been disagreements among global policymakers – or between advanced and developing economies – about which AI-governance issues should take top priority?

MS: We have all seen the cut-throat competition between the United States and China in recent years. The differences between democracies and autocracies are inevitably also playing out in the way governments approach AI. Another division is between countries that can focus on regulation and investment, and the many governments of developing economies that are more concerned about access and inclusion.

These unique economic and social contexts need to be appreciated when we analyze the impact of AI. We are dealing with a technology that can be deployed both to advance scientific breakthroughs and to intensify state surveillance. It would help to see policymakers address more of these specifics.

PS: As always, Europe seems to be ahead of the US when it comes to AI regulation. Could you briefly walk us through the main strengths and weaknesses of the draft AI Act, and what regulators elsewhere might learn from Europe’s recent legislative debate?

MS: The European Union has indeed been ahead of the curve. In fact, when work started on the AI Act, many cautioned that it was too soon. But now, with market developments racing ahead, one could say that the recent political agreement on the law came in the nick of time.

The AI Act is primarily designed to mitigate the risks associated with how AI applications are used. Only at a later stage did lawmakers add requirements for foundation models (the large models, trained on vast amounts of data, that power all the chatbots and other AI tools being released into the market). Those later provisions represent an effort to regulate the technology itself. While I see the EU as an important values-based frontrunner in regulating AI, the tension between regulating uses and regulating the technology has not been resolved. That is something all regulators will have to deal with sooner or later.

There is also the often-overlooked matter of enforcement. New resources and capabilities will need to be freed up to help oversight bodies ensure that the law has teeth. One of the AI Act’s obvious weaknesses – which is the result of how the EU itself is designed – is that it cannot deal with military and national-security matters. Those crucial issues will remain the competence of individual member states, even though they are already key factors in how AI is governed in the US and China. I hope the EU will focus more in the coming years on the intersection of technology and geopolitics.

PS: What do you say to those in tech, venture capital, and libertarian-adjacent circles who argue that Europe’s approach has made it an innovation laggard compared to the US and China?

MS: I would ask them whether they believe innovation trumps the need to protect human rights and democracy. I am convinced that innovation should not be a society’s paramount objective. Just look at the US. For decades, it has trusted markets to deliver good outcomes, yet American society and political life are now reeling from the harms of unregulated technologies. Disinformation is rampant; cyberattackers regularly exploit poorly protected software, at massive cost; and “gig workers” remain economically vulnerable. It is time to question the ideal of unbridled innovation.

As for China, technological innovation remains heavily controlled and instrumentalized by the state. I don’t know anyone in the EU who is inspired by that model.

PS: A common concern about AI is that it will cause widespread job displacement. Do you consider these fears justified? What do you see as the single most powerful response that policymakers could offer to alleviate these concerns?

MS: Yes, concerns about technological unemployment are justified, judging from a series of studies conducted by the International Monetary Fund, McKinsey & Company, and Goldman Sachs, among others. All foresee major effects on jobs, and surveys suggest that AI is already causing job losses. Not everyone has the foresight and convening power that the Writers Guild of America had last year when its members went on strike to prevent the use of AI to replace workers.

Governments are well advised to conduct scenario studies on specific sectors’ exposure to AI disruption. The picture will likely be quite mixed, and it will differ from historical examples in that more highly educated workers face greater risk today. In the past, technological revolutions tended to affect blue-collar workers the most.

For all the talk of AI-related job losses, I see remarkably few policy initiatives. Besides scenario planning, discussions about taxing ever-wealthier AI companies should start now.

The Information Ecosystem

PS: Since Europe’s General Data Protection Regulation entered into force to much fanfare in 2018, the law has drawn much criticism for being difficult to interpret, implement, and enforce, and for generally falling short of expectations. Are these concerns justified? What do critics get right, and what are they missing?

MS: With the amount of hype surrounding the GDPR, it could only disappoint. In practice, it will always be challenging to harmonize 27 national data-protection regimes into one framework that applies to everyone – governments and companies alike. The good news is that EU leaders foresaw the need for periodic reviews and improvements, which are now being undertaken. As with AI, enforcement will need to be shored up to ensure the GDPR does not go down in history as a paper tiger.

PS: In an ideal world, what kind of disinformation safeguards would you like to see ahead of the European Parliament and US national elections this year? Are we still ultimately left with no choice but to trust figures like Elon Musk and Mark Zuckerberg to police election interference campaigns?

MS: Unfortunately, companies (with their own changing policies and priorities) are setting the guardrails of our information ecosystem. Many have laid off or substantially downsized their “trust and safety” teams. Even worse, YouTube declared last year that, as a matter of policy, it would no longer remove or take action against videos peddling blatant lies about the 2020 election. It will not have escaped anyone that those lies form the basis of Donald Trump’s 2024 election campaign. Not only are disinformation researchers being politically targeted and sidelined, but many recent measures designed to improve the conditions of online debate are being reversed.

On top of that, AI – and particularly generative AI – could be a game-changer for elections worldwide, given its ability to generate effectively infinite volumes of disinformation and to target that content more precisely. We urgently need more transparency so that independent researchers can study the effects of changes in corporate policies. Right now, most of that information and data is shielded behind intellectual-property protections.

For its part, the EU is taking important steps to prevent the abuse of “dark patterns” (deceptive user interfaces designed to trick people into making harmful decisions, such as opting in to invasive levels of data tracking), and to regulate the targeting of political ads (which are not permitted to use sensitive personal information). The EU has also agreed on new rules requiring that all political ads “be clearly labeled as such and must indicate who paid for them, how much, to which elections, referendum, or regulatory process they are linked, and whether they have been targeted.”

These are important measures. But I fear they will not come in time for the next EU elections, or for elections in other parts of the world. With democracy already under unprecedented strain worldwide, we may soon bear witness to a major experiment in technologically augmented manipulation.

PS: Should TikTok be banned?

MS: Given China’s intelligence laws, there is ample reason for concern about how corporate data can end up being used by the state. Many in Europe had deep concerns following revelations about how the US National Security Agency (NSA) could use tech-company data for intelligence gathering and law enforcement. That never led to a ban, however.

In general, protecting children from addictive apps is a good idea. I would prefer that decisions about banning the use of an app like TikTok be based on transparent security or child-protection rules that apply equally to all.
